Session B-7

PHY Networking

Conference
8:30 AM — 10:00 AM EDT
Local
May 19 Fri, 5:30 AM — 7:00 AM PDT
Location
Babbio 104

Transfer Beamforming via Beamforming for Transfer

Xueyuan Yang, Zhenlin An and Xiaopeng Zhao (The Hong Kong Polytechnic University, Hong Kong); Lei Yang (The Hong Kong Polytechnic University, China)

Although billions of battery-free backscatter devices (e.g., RFID tags) are intensively deployed nowadays, they still suffer from performance limitations (i.e., short reading range and high miss-reading rates) resulting from inefficient power harvesting. However, applying the classic beamforming technique to backscatter systems runs into a cold-start deadlock: without enough power, the backscatter tag cannot wake up to provide channel parameters; but without channel parameters, the system cannot form beams to provide power. In this work, we propose a new beamforming paradigm called transfer beamforming (TBF), in which beamforming strategies are transferred from reference tags with known positions to power up unknown neighbor tags of interest. Echoing the title, transfer beamforming is accomplished via beamforming (to reference tags first) for transfer (to their neighbors). To do so, we adopt semi-active tags as reference tags, which can be easily powered up by a normal reader. Beamforming is then initiated and transferred to power up passive tags surrounded by the reference tags. A prototype evaluation of TBF with 8 antennas achieves a 99.9% inventory coverage rate in a crowded warehouse with 2,160 RFID tags. Our evaluation reveals that TBF improves power transmission by 6.9 dB and boosts inventory speed by 2× compared with state-of-the-art methods.
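
For intuition, here is a minimal numpy sketch (not the paper's algorithm; the channel model and the 0.9 correlation factor are hypothetical) of why a beam formed toward an already-powered reference tag also concentrates power on a nearby passive tag:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8  # antennas, matching the paper's 8-antenna prototype

# Hypothetical channels: a neighbor tag's channel is correlated with the
# reference tag's channel because the two tags are physically close.
h_ref = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
h_nb = 0.9 * h_ref + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Classic conjugate (maximum-ratio) beamforming toward the reference tag,
# whose channel is measurable because the semi-active tag is already awake.
w = np.conj(h_ref) / np.linalg.norm(h_ref)

def received_power(w, h):
    return np.abs(w @ h) ** 2

# The beam formed for the reference tag "transfers": it delivers far more
# power to the nearby passive tag than random (no-CSI) beams do.
rand = rng.standard_normal((1000, N)) + 1j * rng.standard_normal((1000, N))
baseline = np.mean([received_power(np.conj(v) / np.linalg.norm(v), h_nb) for v in rand])
print(received_power(w, h_nb), baseline)  # beamformed power >> random baseline
```
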
Speaker Xueyuan Yang (Hong Kong Polytechnic University)

My research interests are beamforming and the Internet of Things.


Prism: High-throughput LoRa Backscatter with Non-linear Chirps

Yidong Ren and Puyu Cai (Michigan State University, USA); Jinyan Jiang and Jialuo Du (Tsinghua University, China); Zhichao Cao (Michigan State University, USA)

LoRa backscatter is known for its low-power and long-range communication. Moreover, concurrent transmissions from LoRa backscatter devices are desirable for enabling large-scale backscatter networks. However, linear-chirp-based LoRa signals easily interfere with each other, degrading the throughput of concurrent backscatter. In this paper, we propose Prism, which utilizes different types of non-linear chirps to represent the backscattered data, allowing multiple backscatter devices to transmit concurrently in the same channel. When taking commercial-off-the-shelf (COTS) LoRa linear chirps as the excitation source, converting a linear chirp to its non-linear counterpart is non-trivial on resource-limited backscatter devices. To address this challenge, we design a delicate error function and control the timer to trigger accurate frequency shifts. We implement Prism with customized low-cost hardware, process the signal with a USRP, and evaluate its performance in both indoor and outdoor environments. The measurement results and emulation data show that Prism achieves a peak throughput of 560 kbps and supports 40 Prism tags transmitting concurrently in the same physical channel with a 1% bit error rate, 40× the concurrency of the state of the art.
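
To illustrate the non-linear chirp idea, a numpy sketch that converts a linear LoRa up-chirp into a non-linear counterpart by mixing it with a time-varying frequency offset; the quadratic frequency law and the cumulative-sum phase integration are illustrative assumptions, not Prism's error-function or timer design:

```python
import numpy as np

BW, SF = 125e3, 7            # LoRa bandwidth and spreading factor
N = 2 ** SF                  # samples per symbol when sampled at fs = BW
T = N / BW                   # symbol duration
t = np.arange(N) / BW

# Excitation: a standard linear LoRa up-chirp sweeping -BW/2 .. +BW/2.
lin_chirp = np.exp(1j * np.pi * (BW / T) * t ** 2 - 1j * np.pi * BW * t)

# A hypothetical quadratic frequency offset; different curvatures k would
# give different (near-orthogonal) non-linear chirp types per tag.
def offset_hz(t, k):
    return k * BW * ((t / T) ** 2 - t / T)

# Mixing = multiplying by exp(j*2*pi * integral of the offset); on a real
# tag this integral is approximated by timer-triggered frequency shifts.
phase = 2 * np.pi * np.cumsum(offset_hz(t, k=0.5)) / BW
nonlin_chirp = lin_chirp * np.exp(1j * phase)
print(nonlin_chirp.shape)    # (128,) -> one non-linear chirp symbol
```
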
Speaker Yidong Ren (Michigan State University)

Yidong Ren is a second-year PhD student at Michigan State University.


CSI-StripeFormer: Exploiting Stripe Features for CSI Compression in Massive MIMO System

Qingyong Hu (Hong Kong University of Science and Technology, Hong Kong); Hua Kang (HKUST, Hong Kong); Huangxun Chen (Huawei, Hong Kong); Qianyi Huang (Southern University of Science and Technology & Peng Cheng Laboratory, China); Qian Zhang (Hong Kong University of Science and Technology, Hong Kong); Min Cheng (Noah's Ark Lab, Huawei, Hong Kong)

The massive MIMO gain for wireless communication has been greatly hindered by the feedback overhead of channel state information (CSI) growing linearly with the number of antennas. Recent efforts leverage DNN-based encoder-decoder framework to exploit correlations within CSI matrix for better CSI compression. However, existing works did not fully exploit unique features of CSI, resulting in unsatisfying performance under high compression ratios and being sensitive to multipath effects. Instead of treating CSI as ordinary 2D matrices like images, we reveal the intrinsic stripe-based correlation across CSI matrix. Driven by this insight, we propose CSI-StripeFormer, a stripe-aware encoder-decoder framework to exploit the unique stripe feature for better CSI compression. We design a lightweight encoder with asymmetric convolution kernels to capture various shape features. We further incorporate novel designs tailored for stripe features, including a novel hierarchical Transformer backbone in the decoder and a hybrid attention mechanism to extract and fuse correlations in angular and delay domains. Our evaluation results show that our system achieves over 7dB channel reconstruction gain under a high compression ratio of 64 in multipath-rich scenarios, significantly superior to state-of-the-art approaches. This gain can be further improved to 17dB given the extended embedded dimension of our backbone.
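
The asymmetric-kernel idea can be sketched in a few lines of PyTorch; this is a hypothetical encoder stem, not the paper's model, and the channel counts and kernel sizes are placeholders:

```python
import torch
import torch.nn as nn

# Stripe-aware stem: asymmetric (1 x k and k x 1) kernels respond to
# horizontal/vertical stripes in the 2-channel (real/imag) CSI matrix
# much more cheaply than square k x k kernels would.
class StripeEncoder(nn.Module):
    def __init__(self, ch=16, k=7):
        super().__init__()
        self.horiz = nn.Conv2d(2, ch, kernel_size=(1, k), padding=(0, k // 2))
        self.vert = nn.Conv2d(2, ch, kernel_size=(k, 1), padding=(k // 2, 0))
        self.mix = nn.Conv2d(2 * ch, ch, kernel_size=1)  # fuse both directions

    def forward(self, csi):                  # csi: (B, 2, delay, angle)
        stripes = torch.cat([self.horiz(csi), self.vert(csi)], dim=1)
        return self.mix(torch.relu(stripes))

enc = StripeEncoder()
x = torch.randn(4, 2, 32, 32)                # toy CSI batch
print(enc(x).shape)                           # torch.Size([4, 16, 32, 32])
```
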
Speaker Qingyong Hu (Hong Kong University of Science and Technology)

Qingyong Hu is a PhD student at the Hong Kong University of Science and Technology. He is currently working on bringing artificial intelligence into the IoT world, such as optimizing IoT systems with advanced algorithms and developing novel sensing systems. His research interests include, but are not limited to, AIoT, smart healthcare, and system optimization.



RIS-STAR: RIS-based Spatio-Temporal Channel Hardening for Single-Antenna Receivers

Sara Garcia Sanchez and Kubra Alemdar (Northeastern University, USA); Vini Chaudhary (Northeastern University, USA); Kaushik Chowdhury (Northeastern University, USA)

Small form-factor single-antenna devices, typically deployed within wireless sensor networks, lack many benefits of multi-antenna receivers, such as leveraging spatial diversity to enhance signal reception reliability. In this paper, we introduce the theory of achieving spatial diversity in such single-antenna systems by using reconfigurable intelligent surfaces (RIS). Our approach, called RIS-STAR, proactively perturbs the wireless propagation environment multiple times within the symbol time (which is less than the channel coherence time) by reconfiguring an RIS. By leveraging the stationarity of the channel, RIS-STAR ensures that the only source of perturbation is the chosen and controllable RIS configuration. We first formulate the problem of finding the set of RIS configurations that maximizes channel hardening, a measure of link reliability. Our solution is independent of the transceiver's location relative to the RIS and does not require channel estimation, alleviating two key implementation concerns. We then evaluate the performance of RIS-STAR using a custom simulator and an experimental testbed composed of a PCB-fabricated RIS. Specifically, we demonstrate how a SISO link can be enhanced to perform similarly to a SIMO link, attaining an 84.6% channel hardening improvement in the presence of strong multipath and non-line-of-sight conditions.
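
A toy numpy experiment showing how intra-symbol RIS perturbation hardens the effective channel; it uses random 1-bit RIS configurations rather than the optimized set RIS-STAR actually computes, and all channel values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
M, trials = 64, 500          # RIS elements, Monte Carlo symbols

# Hypothetical static channels within one coherence time:
# tx->RIS (f), RIS->rx (g), and a direct tx->rx path (h_d).
f = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
g = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
h_d = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)

def effective_channel(phases):
    return h_d + np.sum(g * np.exp(1j * phases) * f)

def gain_variability(S):
    """Coefficient of variation of the per-symbol gain when each symbol
    averages S random 1-bit RIS configurations."""
    gains = []
    for _ in range(trials):
        configs = rng.integers(0, 2, size=(S, M)) * np.pi   # 0 / pi shifts
        gains.append(np.mean([abs(effective_channel(c)) ** 2 for c in configs]))
    gains = np.asarray(gains)
    return gains.std() / gains.mean()

# More intra-symbol perturbations -> a "harder" (less variable) channel.
print(gain_variability(1), gain_variability(8))
```
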
Speaker Sara Garcia Sanchez (Northeastern University)

Sara Garcia Sanchez received the B.S. and M.S. degrees in Electrical Engineering from Universidad Politecnica de Madrid in 2016 and 2018, respectively, and the Ph.D. in Computer Engineering from Northeastern University, Boston, MA, in 2022. She currently holds a position as Research Scientist at the IBM Thomas J. Watson Research Center, NY. Her research interests include mmWave communications, reconfigurable intelligent surfaces and 5G standards.


Session Chair

Parth Pathak

Session B-8

Scheduling

Conference
10:30 AM — 12:00 PM EDT
Local
May 19 Fri, 7:30 AM — 9:00 AM PDT
Location
Babbio 104

Target Coverage and Connectivity in Directional Wireless Sensor Networks

Tan D Lam and Dung Huynh (University of Texas at Dallas, USA)

This paper discusses the problem of deploying a minimum number of directional sensors equipped with directional sensing units and directional communication antennas with beam-width \(\theta_c \geq \frac{\pi}{2}\) such that the set of sensors covers a set of targets \(P\) in the 2D plane and forms a symmetric connected communication graph. As this problem is NP-hard, we propose an approximation algorithm that uses up to 3.5 times the number of omni-directional sensors required by the currently best approximation algorithm, proposed by Han et al. This is a significant result since we have broken the barrier of \(2\pi/\frac{\pi}{2} = 4\) when switching from omni-directional sensors to directional ones. Moreover, we improve the approximation ratio of Han et al.'s strip-based algorithm for the Geometric Sector Cover problem from 9 to 7, a result we believe is of independent interest in Computational Geometry. Extensive simulations show that our algorithms only require around 3 times the number of sensors used by Han et al.'s algorithm and significantly outperform other heuristics in practice.
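
For concreteness, a small geometric helper of the kind such sector-cover algorithms build on (an illustrative sketch, not the paper's algorithm), checking whether a directional sensor's sector covers a target:

```python
import math

def sector_covers(sensor, orient, theta, radius, target):
    """Does a sector (apex `sensor`, boresight angle `orient`, beam-width
    `theta`, sensing range `radius`) cover `target`? All names hypothetical."""
    dx, dy = target[0] - sensor[0], target[1] - sensor[1]
    if math.hypot(dx, dy) > radius:
        return False
    # Angular distance from the boresight, wrapped to [-pi, pi].
    diff = (math.atan2(dy, dx) - orient + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= theta / 2

# With theta = pi/2, four orientations suffice to emulate an omni-directional
# sensor -- the source of the 2*pi / (pi/2) = 4 barrier the paper breaks.
print(sector_covers((0, 0), 0.0, math.pi / 2, 1.0, (0.5, 0.3)))  # True
```
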
Speaker Tan Lam (The University of Texas at Dallas)

Tan Lam is currently a PhD student in Computer Science at The University of Texas at Dallas. He received an honors bachelor's degree in Computer Science from Ho Chi Minh City University of Science. His research interest is the design and analysis of combinatorial optimization algorithms in Wireless Sensor Networks.


Eywa: A General Approach for Scheduler Design in AoI Optimization

Chengzhang Li, Shaoran Li, Qingyu Liu, Thomas Hou and Wenjing Lou (Virginia Tech, USA); Sastry Kompella (NEXCEPTA INC, USA)

Age of Information (AoI) is a metric for measuring the freshness of information. Since its inception, there have been active research efforts on designing scheduling algorithms for various AoI-related optimization problems. For each problem, a custom-designed scheduler is typically developed. Instead of following this custom-design path, we pursue a general framework that can be applied to design a wide range of schedulers for AoI-related optimization problems. As a first step toward this vision, we present Eywa, a general framework that can construct high-performance schedulers for a family of AoI-related optimization problems sharing a common setting: an IoT data collection network. We show how to apply Eywa to solve two important AoI-related problems: minimizing the weighted sum of AoIs and minimizing the bandwidth requirement under AoI constraints. For each problem, Eywa either offers a stronger performance guarantee than the state-of-the-art algorithms or provides new results that are not available in the literature.
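
Eywa's general construction is not reproduced here, but the flavor of the first problem (minimizing the weighted sum of AoIs) can be seen in a standard max-weight baseline; all parameters below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 5, 10_000
w = rng.uniform(0.5, 2.0, N)     # per-source AoI weights
p = rng.uniform(0.6, 0.95, N)    # per-source link success probabilities

age = np.ones(N)
total = 0.0
for _ in range(T):
    # Max-weight rule: poll the source with the largest expected one-step
    # reduction in weighted AoI (weight * success prob * current age).
    k = int(np.argmax(w * p * age))
    if rng.random() < p[k]:
        age[k] = 0               # a fresh sample is delivered this slot
    age += 1
    total += float(w @ age)
print("time-average weighted AoI:", total / T)
```
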
Speaker Chengzhang Li (Ohio State University)

Chengzhang Li is currently a postdoc at the AI-EDGE Institute, Ohio State University, supervised by Prof. Ness Shroff. He received his Ph.D. degree in Computer Engineering from Virginia Tech in 2022, supervised by Prof. Tom Hou, and his B.S. degree in Electronic Engineering from Tsinghua University in 2017. His current research interests are real-time scheduling in 5G, Age of Information (AoI), and machine learning in wireless networks.


Dynamic Resource Allocation for Deep Learning Clusters with Separated Compute and Storage

Mingxia Li (University of Science and Technology of China, China); Zhenhua Han (Microsoft Research Asia, China); Chi Zhang (University of Science and Technology of China, China); Ruiting Zhou (Southeast University, China); Yuanchi Liu and Haisheng Tan (University of Science and Technology of China, China)

The separation of storage and computing in modern cloud services eases the deployment of general applications. However, with the development of accelerators such as GPUs/TPUs, Deep Learning (DL) training suffers from potential IO bottlenecks when loading data from storage clusters. DL training jobs therefore need to either create a local cache in the compute cluster to reduce bandwidth demand or scale up IO capacity at a higher bandwidth cost. Choosing the best strategy is challenging due to the heterogeneous cache/IO preferences of DL models, datasets shared among multiple jobs, and the dynamic GPU scaling of DL training. In this work, we exploit job characteristics based on training throughput, dataset size, and scalability. For fixed GPU allocations, we propose CBA to minimize the training cost with a closed-form approach. For clusters that can automatically scale the GPU allocations of jobs, we extend CBA to AutoCBA to support diverse job utility functions and maximize social welfare within a limited budget. Extensive experiments with production traces validate that CBA and AutoCBA can reduce IO cost and improve total social welfare by up to 20.5% and 2.27×, respectively, over state-of-the-art schedulers for DL training.
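
The cache-versus-IO decision at the heart of the paper can be pictured with a toy cost comparison; this is illustrative only (not CBA's closed form), and all prices and job parameters are hypothetical:

```python
def io_strategy(dataset_gb, io_gbps_needed, cache_price_gb_h, io_price_gbps_h, hours):
    """Illustrative decision: cache the dataset in the compute cluster when
    renting cache is cheaper than renting the IO bandwidth the job would
    otherwise need for the same duration."""
    cache_cost = dataset_gb * cache_price_gb_h * hours
    io_cost = io_gbps_needed * io_price_gbps_h * hours
    return ("cache", cache_cost) if cache_cost <= io_cost else ("io", io_cost)

# A throughput-hungry job with a small dataset prefers caching; a huge,
# rarely re-read dataset prefers paying for IO.
print(io_strategy(50, 8, cache_price_gb_h=0.02, io_price_gbps_h=0.50, hours=24))
print(io_strategy(5000, 1, cache_price_gb_h=0.02, io_price_gbps_h=0.50, hours=24))
```
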
Speaker Mingxia Li (University of Science and Technology of China)

Mingxia Li is currently a postgraduate student in computer science at the University of Science and Technology of China. Her research interests lie in networking algorithms and systems.


LIBRA: Contention-Aware GPU Thread Allocation for Data Parallel Training in High Speed Networks

Yunzhuo Liu, Bo Jiang and Shizhen Zhao (Shanghai Jiao Tong University, China); Tao Lin (Communication University of China, China); Xinbing Wang (Shanghai Jiao Tong University, China); Chenghu Zhou (Chinese Academy of Sciences, China)

Overlapping gradient communication with backward computation is a popular technique for reducing communication cost in the widely adopted data-parallel S-SGD training. However, the resource contention between computation and All-Reduce communication in GPU-based training reduces the benefits of overlap. As GPU cluster networks evolve from low-bandwidth TCP to high-speed networks, more GPU resources are required to efficiently utilize the bandwidth, making the contention more noticeable. Existing communication libraries fail to account for such contention when allocating GPU threads and thus have suboptimal performance. In this paper, we propose to mitigate the contention by balancing computation and communication time. We formulate an optimization problem that decides the communication thread allocation to reduce overall backward time. We develop a dynamic-programming-based near-optimal solution and extend it to co-optimize thread allocation with tensor fusion. We conduct simulation studies and real-world experiments on an 8-node GPU cluster with a 50 Gbps RDMA network, training four representative DNN models. Results show that our method reduces backward time by 10%-20% compared with Horovod-NCCL and by 6%-13% compared with tensor-fusion-optimization-only methods. Simulation shows that our method achieves the best scalability, with a training speedup of 1.2× over the best-performing baseline as cluster size scales up.
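
The core intuition (balance compute and communication time under contention) can be sketched with a toy contention model; the 1/(1-s) and 1/s scaling below is an assumption for illustration, not LIBRA's measured profile or its dynamic program:

```python
import numpy as np

# Toy model: with a fixed SM budget, giving a share s of GPU threads to
# All-Reduce slows backward compute by 1/(1-s) and speeds communication
# by 1/s. Overlapped backward time is the max of the two.
comp_base, comm_base = 80.0, 50.0   # ms, uncontended times (hypothetical)

def backward_time(s):
    return max(comp_base / (1 - s), comm_base / s)

shares = np.linspace(0.05, 0.95, 91)
best = min(shares, key=backward_time)
# Balanced at s = comm / (comp + comm) ~ 0.38 for these numbers.
print(f"best comm share {best:.2f}, backward {backward_time(best):.1f} ms")
```
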
Speaker Yunzhuo Liu (Shanghai Jiao Tong University)

Yunzhuo Liu received his B.S. degree from Shanghai Jiao Tong University, where he is currently pursuing the Ph.D. degree at the John Hopcroft Center. He has published papers in top-tier conferences, including SIGMETRICS, INFOCOM, ACM MM, and ICNP. His research interests include distributed training and programmable networks.


Session Chair

Ben Liang

Session B-9

Wireless Systems

Conference
1:30 PM — 3:00 PM EDT
Local
May 19 Fri, 10:30 AM — 12:00 PM PDT
Location
Babbio 104

NFChain: A Practical Fingerprinting Scheme for NFC Tag Authentication

Yanni Yang (Shandong University, China); Jiannong Cao (The Hong Kong Polytechnic University, Hong Kong); Zhenlin An (The Hong Kong Polytechnic University, Hong Kong); Yanwen Wang (Hunan University, China); Pengfei Hu and Guoming Zhang (Shandong University, China)

NFC tag authentication is in high demand to prevent tag abuse. Recent fingerprinting methods employ the physical-layer signal, which embeds the tag's hardware imperfections, for authentication. However, existing NFC fingerprinting methods suffer from either low scalability to large numbers of tags or incompatibility with NFC protocols, impeding the practical application of NFC authentication systems. To fill this gap, we propose NFChain, a new NFC fingerprinting scheme that excavates the tag's hardware uniqueness from the protocol-agnostic tag response signal. Specifically, we harness an agile and protocol-compatible frequency band of NFC to extract the tag fingerprint from a chain of tag responses over multiple frequencies, which significantly improves fingerprint scalability. However, extracting the desired fingerprint encounters two practical challenges: (1) fingerprint inconsistency under different device configurations and (2) fingerprint variations across multiple measurements of the same tag due to noise in generic devices. To tackle these challenges, we first design an effective nulling method to eliminate the effect of device configurations. Second, we employ contrastive learning to reduce fingerprint variations for accurate authentication. Extensive experiments show that we can achieve an FRR as low as 3.7% and an FAR of 4.1% for over 600 tags.
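
The nulling idea can be illustrated with a toy multiplicative response model (all gains hypothetical; the paper's contrastive-learning stage is replaced here by simple cosine matching):

```python
import numpy as np

rng = np.random.default_rng(8)
F = 16                                   # probing frequencies in the chain

def measure(tag_gain, device_gain, noise=0.01):
    # Toy model: received response = per-frequency tag hardware response
    # times per-frequency device/configuration response, plus noise.
    return tag_gain * device_gain * (1 + noise * rng.standard_normal(F))

def fingerprint(resp, ref_resp):
    # "Nulling": dividing by a reference measurement taken with the same
    # device configuration cancels the device term, leaving the tag term.
    fp = resp / ref_resp
    fp = fp - fp.mean()                  # decorrelate, then normalize
    return fp / np.linalg.norm(fp)

tag_a, tag_b = rng.uniform(0.5, 1.5, F), rng.uniform(0.5, 1.5, F)
ref = rng.uniform(0.5, 1.5, F)           # a known reference tag
dev1, dev2 = rng.uniform(0.5, 1.5, F), rng.uniform(0.5, 1.5, F)

fp_enroll = fingerprint(measure(tag_a, dev1), measure(ref, dev1))
fp_query = fingerprint(measure(tag_a, dev2), measure(ref, dev2))
fp_other = fingerprint(measure(tag_b, dev2), measure(ref, dev2))
print(fp_enroll @ fp_query, fp_enroll @ fp_other)  # same tag scores higher
```
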
Speaker Zhenlin An (The Hong Kong Polytechnic University)

Zhenlin An is a postdoc at The Hong Kong Polytechnic University. His research interests are in wireless sensing and communication, metasurfaces, and low-power IoT systems. He is currently on the job market.


ICARUS: Learning on IQ and Cycle Frequencies for Detecting Anomalous RF Underlay Signals

Debashri Roy (Northeastern University, USA); Vini Chaudhary (Northeastern University, USA); Chinenye Tassie (Northeastern University, USA); Chad M Spooner (NorthWest Research Associates, USA); Kaushik Chowdhury (Northeastern University, USA)

The RF environment in a secure space can be compromised by intentional transmissions of hard-to-detect underlay signals that overlap with a high-power baseline transmission. Specifically, we consider the case where a direct sequence spread spectrum (DSSS) signal is the underlay signal hiding within a baseline 4G Long-Term Evolution (LTE) signal. In contrast to overt actions like jamming, the DSSS signal leaves the LTE signal decodable, which makes it hard to detect. ICARUS is a machine-learning-based framework that offers choices at the physical layer for inference with inputs of (i) IQ samples only, (ii) cycle frequency features obtained via cyclostationary signal processing (CSP), and (iii) a fusion of both, to detect the underlay DSSS signal and its modulation type within LTE frames. ICARUS chooses the best inference method considering both the expected accuracy and the computational overhead. ICARUS is rigorously validated on multiple real-world datasets that include signals captured in cellular bands in the wild and on the NSF POWDER testbed of the Platform for Advanced Wireless Research (PAWR). Results reveal that ICARUS can detect DSSS anomalies and their modulation scheme with 98-100% and 67-99% accuracy, respectively, while completing inference within 3-40 milliseconds on an NVIDIA A100 GPU platform.
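
The accuracy-versus-overhead selection can be pictured with a tiny selector sketch; the accuracy numbers are hypothetical, while the 3-40 ms latency range echoes the abstract:

```python
def choose_path(latency_budget_ms, paths):
    """Illustrative selector in the spirit of ICARUS: among inference paths
    (name, expected_accuracy, latency_ms), pick the most accurate one that
    fits the compute budget, else fall back to the fastest."""
    feasible = [p for p in paths if p[2] <= latency_budget_ms]
    return max(feasible, key=lambda p: p[1]) if feasible else min(paths, key=lambda p: p[2])

paths = [("iq_only", 0.90, 3), ("csp_only", 0.95, 25), ("fusion", 0.99, 40)]
print(choose_path(30, paths))   # -> ('csp_only', 0.95, 25)
```
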
Speaker Debashri Roy (Northeastern University)

Debashri Roy is an associate research scientist in the Department of Electrical and Computer Engineering, Northeastern University. She received her Ph.D. in Computer Science from the University of Central Florida in May 2020. Her research interests involve machine-learning-based applications in the wireless communication domain, targeting deep spectrum learning, millimeter-wave beamforming, multimodal fusion, and networked robotics for next-generation communication.


WSTrack: A Wi-Fi and Sound Fusion System for Device-free Human Tracking

Yichen Tian, Yunliang Wang, Ruikai Zheng, Xiulong Liu, Xinyu Tong and Keqiu Li (Tianjin University, China)

Voice assistants benefit from the ability to localize users; in particular, a user's habits can be analyzed from historical trajectories to provide better services. However, current voice localization methods require the user to actively issue voice commands, so voice assistants cannot track silent users most of the time. This paper presents WSTrack, a Wi-Fi and sound fusion system for device-free human tracking. Notably, current voice assistants naturally support both Wi-Fi and acoustic functions, so we can build a multi-modal prototype with just a voice assistant and a Wi-Fi router. To track the movement of silent users, our insights are as follows: (1) the voice assistant can hear the sound of the user's footsteps and extract the direction the user is in; (2) the user's velocity can be extracted from the Wi-Fi signal. By fusing the multi-modal information, we are able to track users with a single voice assistant and Wi-Fi router. Our implementation and evaluation on commodity devices demonstrate that WSTrack outperforms current systems, with a median tracking error of 0.37 m.
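
One simple way to picture fusing a footstep bearing with a Wi-Fi speed is a toy particle filter; this is an illustrative stand-in, not WSTrack's estimator, and all parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)
P = 1000
pts = rng.uniform(-3, 3, size=(P, 2))     # particles: candidate positions

def fuse(pts, bearing, speed, dt=0.5):
    """Propagate particles with the Wi-Fi-derived speed along random
    headings, then reweight by agreement with the footstep bearing heard
    at the voice assistant (placed at the origin)."""
    headings = rng.uniform(0, 2 * np.pi, len(pts))
    step = speed * dt * np.stack([np.cos(headings), np.sin(headings)], axis=1)
    pts = pts + step
    diff = np.arctan2(pts[:, 1], pts[:, 0]) - bearing
    diff = (diff + np.pi) % (2 * np.pi) - np.pi          # wrap to [-pi, pi]
    w = np.exp(-diff ** 2 / 0.05)
    idx = rng.choice(len(pts), size=len(pts), p=w / w.sum())  # resample
    return pts[idx]

# A bearing alone leaves the range ambiguous; repeating the update as the
# user moves lets the speed measurements resolve it over time.
pts = fuse(pts, bearing=np.pi / 4, speed=1.0)
print("position estimate:", pts.mean(axis=0))
```
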
Speaker Yichen Tian (Tianjin University)

Yichen Tian is currently working toward the master’s degree at the College of Intelligence and Computing, Tianjin University, China. His research interests include wireless sensing and indoor localization.


SubScatter: Subcarrier-Level OFDM Backscatter

Jihong Yu (Beijing Institute of Technology, China); Caihui Du (Beijing Institute of Technology, China); Jiahao Liu (Beijing Institute of Technology, China); Rongrong Zhang (Capital Normal University, China); Shuai Wang (Beijing Institute of Technology, China)

OFDM backscatter is crucial in passive IoT. Most existing works adopt phase-modulated schemes to embed tag data, which suffer from three drawbacks: symbol-level modulation granularity, heavy reliance on synchronization accuracy, and low tolerance of symbol time offset (STO) and carrier frequency offset (CFO). We introduce SubScatter, the first subcarrier-level frequency-modulated OFDM backscatter, which is able to tolerate larger synchronization errors, STO, and CFO. The unique feature that sets SubScatter apart from other backscatter systems is our subcarrier shift keying (SSK) modulation. This method pushes the modulation granularity down to the subcarrier by encoding and mapping tag data into different subcarrier patterns. We also design a tandem frequency shift (TFS) scheme that enables SSK at low cost and low power. For decoding, we propose a correlation-based method that recovers tag data from the correlation between the original and backscattered OFDM symbols. We prototype and test SubScatter under 802.11g OFDM WiFi signals. Comprehensive evaluations show that SubScatter outperforms prior work in effectiveness and robustness. Specifically, SubScatter achieves 743 kbps throughput, 3.1× and 14.9× higher than RapidRider and MOXcatter, respectively. It also has a much lower BER under noise and interference, over 6× better than RapidRider or MOXcatter.
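
A numpy sketch of the subcarrier-shift idea and its correlation decoder (an idealized, noise-free illustration; SubScatter's TFS hardware scheme is only approximated by the complex-exponential mixing below):

```python
import numpy as np

Nsc = 64                                  # OFDM subcarriers (802.11g-like)
rng = np.random.default_rng(3)
sym_f = rng.choice([-1, 1], size=Nsc) + 0j   # excitation symbol (freq domain)
tx = np.fft.ifft(sym_f)                      # time-domain excitation

def ssk_backscatter(tx_time, shift):
    # A cyclic shift of the subcarrier pattern equals a time-domain
    # multiplication by a complex exponential -- cheap for a tag to
    # approximate with square-wave frequency shifting.
    n = np.arange(tx_time.size)
    return tx_time * np.exp(2j * np.pi * shift * n / tx_time.size)

def ssk_decode(rx_time, tx_freq):
    # Correlate the received subcarrier pattern against all cyclic shifts
    # of the original pattern; the argmax recovers the tag data.
    rx_f = np.fft.fft(rx_time)
    corr = [abs(np.vdot(np.roll(tx_freq, s), rx_f)) for s in range(Nsc)]
    return int(np.argmax(corr))

data = 13                                    # log2(64) = 6 bits per symbol
print(ssk_decode(ssk_backscatter(tx, data), sym_f))   # -> 13
```
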
Speaker Jihong Yu (Beijing Institute of Technology)

Jihong Yu received the B.E. degree in communication engineering and the M.E. degree in communication and information systems from Chongqing University of Posts and Telecommunications, Chongqing, China, in 2010 and 2013, respectively, and the Ph.D. degree in computer science from the University of Paris-Sud, Orsay, France, in 2016. He was a postdoctoral fellow in the School of Computing Science, Simon Fraser University, Canada. He is currently a professor in the School of Information and Electronics at Beijing Institute of Technology. His research interests include backscatter networking, the Internet of Things, and space-air communications. He is serving as an Area Editor for Elsevier Computer Communications and an Associate Editor for the IEEE Internet of Things Journal and the IEEE Transactions on Vehicular Technology. He received the Best Paper Award at the IEEE Global Communications Conference (GLOBECOM) 2020.


Session Chair

Alex Sprintson

Session B-10

Learning

Conference
3:30 PM — 5:00 PM EDT
Local
May 19 Fri, 12:30 PM — 2:00 PM PDT
Location
Babbio 104

Learning to Schedule Tasks with Deadline and Throughput Constraints

Qingsong Liu and Zhixuan Fang (Tsinghua University, China)

We consider a task scheduling scenario where the controller activates one of \(K\) task types at each time. Each task induces a random completion time, and a reward is obtained only after the task is completed. The statistics of the completion time and reward distributions of all task types are unknown to the controller, which needs to learn to schedule tasks to maximize the accumulated reward within a given time horizon \(T\). Motivated by practical scenarios, we require the designed policy to satisfy a system throughput constraint. In addition, we introduce an interruption mechanism that terminates ongoing tasks lasting longer than certain deadlines. We model this scheduling problem as an online learning problem with deadline and throughput constraints. We then characterize the optimal offline policy and develop efficient online learning algorithms based on the Lyapunov method. We prove that our online learning algorithm achieves \(O(\sqrt{T})\) regret and zero constraint violations. We also conduct simulations to evaluate the performance of the developed learning algorithms.
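
A heavily simplified sketch of the two ingredients such approaches combine (optimistic UCB estimates plus a Lyapunov virtual queue for the throughput constraint); the arm-selection rule and all parameters below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)
K, T, V, D = 4, 20_000, 20.0, 4       # arms, horizon, Lyapunov weight, deadline
rho = 0.25                            # required completion (throughput) rate
mu_r = rng.uniform(0.2, 0.9, K)       # unknown mean rewards
p_fast = rng.uniform(0.3, 0.9, K)     # unknown completion-time parameters

n = np.ones(K); rhat = np.zeros(K); phat = np.full(K, 0.5)
Q, t = 0.0, 0
while t < T:
    # Drift-plus-penalty choice with optimistic estimates: V trades reward
    # against the virtual queue Q, which grows while the empirical
    # throughput lags the constraint rho.
    bonus = np.sqrt(2 * np.log(t + 2) / n)
    k = int(np.argmax(V * (rhat + bonus) + Q * (phat + bonus)))
    dur = int(rng.geometric(p_fast[k]))     # random completion time
    done = dur <= D                         # interrupt tasks past the deadline
    elapsed = min(dur, D)
    r = float(done and rng.random() < mu_r[k])
    n[k] += 1
    rhat[k] += (r - rhat[k]) / n[k]
    phat[k] += (float(done) - phat[k]) / n[k]
    Q = max(Q + rho * elapsed - float(done), 0.0)   # Lyapunov virtual queue
    t += elapsed
print("reward estimates:", np.round(rhat, 2))
```
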
Speaker Qingsong Liu (Tsinghua University)

Qingsong Liu received the B.Eng. degree in electronic engineering from Tsinghua University, China. He is currently pursuing the Ph.D. degree with the Institute for Interdisciplinary Information Sciences (IIIS) of Tsinghua University. His research interests include online learning, and networked and computer systems modeling and optimization. He has published several papers in IEEE GLOBECOM, IEEE ICASSP, IEEE WiOpt, IEEE INFOCOM, ACM/IFIP Performance, and NeurIPS.


A New Framework: Short-Term and Long-Term Returns in Stochastic Multi-Armed Bandit

Abdalaziz Sawwan and Jie Wu (Temple University, USA)

The Stochastic Multi-Armed Bandit (MAB) problem has recently been studied widely due to its vast range of applications. The classical delayed-feedback model considers the reward of a pulled arm to be observed after a time delay sampled from a random distribution assigned to each arm. In this paper, we propose an extended framework in which pulling an arm gives both an instant (short-term) reward and a delayed (long-term) reward. The distributions of the short-term and long-term reward values are related by a previously known relationship, while the time-delay distribution of an arm is independent of its reward distributions. We devise three UCB-based algorithms, two of which achieve near-optimal regret for this new model, with a corresponding regret analysis for each. Additionally, the time-delay distributions are allowed to yield infinite delay, corresponding to the case where an arm gives only a short-term reward. Finally, we evaluate our algorithms and compare this paradigm with previously known models on both a synthetic data set and a real data set reflecting one of the potential applications of this model.
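
To make the setting concrete, a vanilla UCB adapted to the instant-plus-delayed reward structure (not one of the paper's three algorithms; the linear short/long relationship and all distributions below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)
K, T = 3, 20_000
mu_short = rng.uniform(0.1, 0.6, K)      # unknown short-term reward means
alpha = 1.5                              # known short->long relationship
delay_p = rng.uniform(0.05, 0.3, K)      # per-arm delay distributions

n = np.zeros(K); s = np.zeros(K)         # pulls, summed observed reward
pending = []                             # (due_time, arm, long_term_reward)
for t in range(1, T + 1):
    due = [x for x in pending if x[0] <= t]
    pending = [x for x in pending if x[0] > t]
    for _, arm, r in due:                # long-term rewards arrive late
        s[arm] += r
    ucb = np.where(n > 0,
                   s / np.maximum(n, 1) + np.sqrt(2 * np.log(t) / np.maximum(n, 1)),
                   np.inf)
    k = int(np.argmax(ucb))
    n[k] += 1
    s[k] += float(rng.random() < mu_short[k])       # instant reward
    d = int(rng.geometric(delay_p[k]))              # random delay
    pending.append((t + d, k, alpha * float(rng.random() < mu_short[k])))
print("pulls per arm:", n.astype(int))   # concentrates on the best arm
```
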
Speaker Abdalaziz Sawwan (Temple University)

Abdalaziz Sawwan is a third-year Ph.D. student in Computer and Information Sciences at Temple University. He is a Research Assistant at the Center for Networked Computing. Sawwan received his bachelor’s degree in Electrical Engineering from the University of Jordan in 2020. His current research interests include multi-armed bandits, communication networks, mobile charging, and wireless networks.


DeepScheduler: Enabling Flow-Aware Scheduling in Time-Sensitive Networking

Xiaowu He, Xiangwen Zhuge, Fan Dang, Wang Xu and Zheng Yang (Tsinghua University, China)

Time-Sensitive Networking (TSN) has been considered the most promising network paradigm for time-critical applications (e.g., industrial control), and traffic scheduling is the core mechanism by which TSN ensures low latency and determinism. As the demand for flexible production increases, industrial network topologies and settings change frequently due to pipeline switches, creating a pressing need for more efficient TSN scheduling algorithms. In this paper, we propose DeepScheduler, a fast and scalable flow-aware TSN scheduler based on deep reinforcement learning. In contrast to prior work that relies heavily on expert knowledge or problem-specific assumptions, DeepScheduler automatically learns effective scheduling policies from the complex dependencies among data flows. We design a scalable neural network architecture that can process arbitrary network topologies with informative representations of the problem, and we decompose the problem's decision space for efficient model training. In addition, we develop a suite of TSN-compatible testbeds with hardware-software co-design and DeepScheduler integration. Extensive experiments on both simulation and physical testbeds show that DeepScheduler runs >150/5 times faster and improves schedulability by 36%/39% compared to state-of-the-art heuristic/expert-based methods, respectively. With both efficiency and effectiveness, DeepScheduler removes scheduling as an obstacle to flexible manufacturing.
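
The flow-by-flow decomposition of the decision space can be seen in a greedy stand-in (illustrative only: a single link, a hypothetical flow set, and a greedy rule where DeepScheduler would apply its learned policy):

```python
def schedule(flows, hyperperiod):
    """Greedy TSN offset assignment: flows = [(period, length)] in slots;
    give each flow the earliest offset whose periodic transmissions avoid
    all already-scheduled ones on one link. Returns offsets or None."""
    busy = [False] * hyperperiod
    offsets = []
    for period, length in flows:
        for off in range(period - length + 1):
            slots = [(k * period + off + i) % hyperperiod
                     for k in range(hyperperiod // period)
                     for i in range(length)]
            if not any(busy[s] for s in slots):
                for s in slots:
                    busy[s] = True
                offsets.append(off)
                break
        else:
            return None   # unschedulable under this greedy visiting order
    return offsets

print(schedule([(4, 1), (8, 2), (8, 2)], hyperperiod=8))   # -> [0, 1, 5]
```
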
Speaker Xiaowu He (Tsinghua University)

Xiaowu He is a PhD candidate in the School of Software, Tsinghua University, under the supervision of Prof. Zheng Yang. He received his B.E. degree from the School of Computer Science and Engineering at the University of Electronic Science and Technology of China in 2019. His research interests include Time-Sensitive Networking, edge computing, and the Internet of Things.


The Power of Age-based Reward in Fresh Information Acquisition

Zhiyuan Wang, Qingkai Meng, Shan Zhang and Hongbin Luo (Beihang University, China)

Many Internet platforms collect fresh information about various points of interest (PoIs) by relying on users who happen to be near the PoIs. The platform offers rewards to incentivize users and compensate the costs they incur from information acquisition. In practice, the user cost (and its distribution) is hidden from the platform, so determining the optimal reward is challenging. In this paper, we investigate how the platform should dynamically reward users, aiming to jointly reduce the age of information (AoI) and the operational expenditure (OpEx). Due to the hidden cost distribution, this is an online non-convex learning problem with partial feedback. To overcome the challenge, we first design an age-based rewarding scheme, which decouples the OpEx from the unknown cost distribution and enables the platform to accurately control its OpEx. We then take advantage of the age-based rewarding scheme and propose an exponentially discretizing and learning (EDAL) policy for platform operation. We prove that the EDAL policy performs asymptotically as well as the optimal decision derived from the cost distribution. Simulation results show that the age-based rewarding scheme shields the platform's OpEx from the influence of user characteristics and verify the asymptotic optimality of the EDAL policy.
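
A toy accept/reject simulation of the age-based rewarding idea; the reward law, cost distribution, and parameters are assumptions, and the paper's EDAL policy additionally learns the best scheme online:

```python
import numpy as np

rng = np.random.default_rng(6)
T, c = 50_000, 0.05     # horizon and (hypothetical) reward slope

age, opex, aoi_sum = 1, 0.0, 0
for _ in range(T):
    # Age-based scheme: the offered reward grows with the PoI's AoI, so
    # stale PoIs attract updates while the payment stream is governed by
    # the slope c rather than by the hidden user-cost law.
    offer = c * age
    cost = rng.lognormal(mean=-1.0, sigma=0.5)   # hidden user cost
    if offer >= cost:        # a nearby user accepts and samples the PoI
        opex += offer
        age = 1
    else:
        age += 1
    aoi_sum += age
print(f"avg AoI {aoi_sum / T:.1f}, OpEx per slot {opex / T:.3f}")
```
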
Speaker Zhiyuan Wang (Beihang University)

Zhiyuan Wang is an associate professor with the School of Computer Science and Engineering at Beihang University. He received his PhD from The Chinese University of Hong Kong (CUHK) in 2019. His research interests include network economics, game theory, and online learning.


Session Chair

Saptarshi Debroy

